Avoiding Non-Integrable Beliefs in Expectation Propagation

Zhao, Zilu, Chen, Jichao, Slock, Dirk

arXiv.org Machine Learning

Expectation Propagation (EP) is a widely used iterative message-passing algorithm that decomposes a global inference problem into multiple local ones. It approximates marginal distributions as ``beliefs'' using intermediate functions called ``messages''. It has been shown that the stationary points of EP coincide with those of a corresponding constrained Bethe Free Energy (BFE) optimization problem; EP can therefore be viewed as an iterative method for optimizing the constrained BFE. However, the iterates may fall outside the feasible set of the BFE optimization problem, i.e., the beliefs may not be integrable. In most of the literature, various methods are used to keep all the messages integrable. In most Bayesian estimation problems, however, restricting the messages to be integrable shrinks the actual feasible set. Furthermore, in extreme cases where the factors themselves are not integrable, making the messages integrable is not enough to guarantee integrable beliefs. In this paper, two EP frameworks are proposed that ensure EP has integrable beliefs; both methods allow non-integrable messages. We then apply the proposed methods to the signal recovery problem in the Generalized Linear Model (GLM).
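As an illustration of the core EP step the abstract refers to (not of the paper's proposed frameworks), the following is a minimal one-dimensional moment-matching sketch. The function names and the step-function factor used in the example are illustrative assumptions, not from the paper:

```python
import numpy as np

def moment_match(cavity_mean, cavity_var, factor, half_width=10.0, n=20001):
    """EP's core projection: match the mean and variance of the 'tilted'
    distribution cavity(x) * factor(x) with a Gaussian belief, using a
    dense grid and Riemann sums for the integrals."""
    s = np.sqrt(cavity_var)
    x = np.linspace(cavity_mean - half_width * s, cavity_mean + half_width * s, n)
    dx = x[1] - x[0]
    cavity = np.exp(-0.5 * (x - cavity_mean) ** 2 / cavity_var)
    tilted = cavity * factor(x)
    Z = tilted.sum() * dx                          # normalizer of the tilted belief
    mean = (x * tilted).sum() * dx / Z
    var = ((x - mean) ** 2 * tilted).sum() * dx / Z
    return mean, var

def message_precision(belief_var, cavity_var):
    """Precision of the updated message, obtained by dividing the matched
    Gaussian belief by the cavity. A negative value means the message is a
    non-integrable (improper) Gaussian -- exactly the situation the
    abstract's frameworks are designed to accommodate."""
    return 1.0 / belief_var - 1.0 / cavity_var
```

For a standard-normal cavity and the step factor 1{x > 0}, the tilted distribution is a half-normal, so the matched moments are sqrt(2/pi) and 1 - 2/pi.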






Two-sided Calibration Theorem

Neural Information Processing Systems

Theorem 2. Suppose that the predictive distribution Q has sufficient capacity to approximate the true unknown distribution P, and that the data are i.i.d. Then Lm(P, Q) = 0 if and only if P = Q when F is a unit ball in a universal RKHS [13].

This holds because the confidence level p2 - p1 is exactly equal to the proportion of samples {y1, ..., yn} covered by the two-sided prediction interval.

B.1 Baselines

MC-Dropout (MCD) [12]: A variant of standard dropout, known as Monte-Carlo Dropout.

Heteroscedastic Neural Network (HNN) [17]: Similar to heteroscedastic regression, the network has two outputs in the last layer, corresponding to the predicted mean and variance for each input xi.
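An HNN of the kind described above is typically trained with a heteroscedastic Gaussian negative log-likelihood. A minimal sketch of that loss, assuming the second network output is the log-variance (a common parameterization for numerical stability; the excerpt does not specify it):

```python
import numpy as np

def heteroscedastic_nll(y, mean, log_var):
    """Gaussian negative log-likelihood averaged over a batch, with the
    constant 0.5*log(2*pi) term dropped. `mean` and `log_var` are the
    network's two outputs for each input x_i."""
    return 0.5 * np.mean(log_var + (y - mean) ** 2 / np.exp(log_var))
```

Predicting the log-variance keeps the variance positive without explicit constraints; exponentiating recovers the predicted variance for each input.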